The system runs different processes depending on the applications running. The operations of these applications and their respective processes are executed through system calls. There is a wide range of system calls that the OS can service. In general, the most frequent types of system calls provide a broad characterisation of the workload running on the OS. This workload characterisation is a good starting point for understanding the system and the applications running on it. In addition to the frequent system calls, details on the processes making the syscalls are helpful in understanding the system. The latency of both the system calls and the processes making them is a starting point for understanding the latency of the system as a whole. From these results, we can go further and look at the performance of the different compute resources. The characterisation helps in knowing which compute resources to focus on; for example, if read syscalls are frequent, we can focus on the filesystem and its caches.
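As an illustrative sketch, assuming a Linux host with the bcc Python bindings installed and root privileges, syscalls can be counted system-wide from the raw\_syscalls:sys\_enter tracepoint (similar in spirit to the syscount tool); the 10-second interval is arbitrary:

\begin{verbatim}
from bcc import BPF
import time

# Count system calls by syscall ID using the raw_syscalls:sys_enter tracepoint.
prog = """
BPF_HASH(counts, u32, u64);

TRACEPOINT_PROBE(raw_syscalls, sys_enter) {
    u32 id = args->id;
    counts.increment(id);
    return 0;
}
"""

b = BPF(text=prog)
print("Counting syscalls for 10 seconds...")
time.sleep(10)

# Print the ten most frequent syscall IDs observed in the interval.
for k, v in sorted(b["counts"].items(), key=lambda kv: kv[1].value,
                   reverse=True)[:10]:
    print(f"syscall id {k.value}: {v.value} calls")
\end{verbatim}

Translating syscall IDs to names and grouping the same counts by process name would then yield the workload characterisation described above.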
The CPU is responsible for executing all workloads on the NFV. Like other resources, the CPU is managed by the kernel. User-level applications access CPU resources by issuing system calls to the kernel; in addition, memory loads and stores can trigger page faults, which the kernel must also handle. The primary consumers of CPU resources are threads (also called tasks), which belong to processes, kernel routines and interrupt routines. The kernel manages the sharing of the CPU via a scheduler.
There are three thread states: ON-PROC for threads running on a CPU, RUNNABLE for threads that could run but are waiting their turn, and SLEEP for threads blocked on another event, including uninterruptible waits. These can be grouped into two categories for easier analysis: on-CPU, referring to ON-PROC, and off-CPU, referring to all other states, where the thread is not running on a CPU. Threads leave the CPU in one of two ways: (1) voluntarily, if they block on I/O, a lock, or a sleep, or (2) involuntarily, if they have exceeded their scheduled allocation of CPU time. When a CPU switches from running one process or thread to another, it switches address spaces and other metadata. This is called a context switch, and it also consumes CPU resources. All of the activities described so far consume CPU time. In addition to time, another CPU resource used by processes, kernel routines and interrupt routines is the CPU cache.
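The kernel already exposes per-process voluntary and involuntary context-switch counts; a minimal sketch reads them from the voluntary\_ctxt\_switches and nonvoluntary\_ctxt\_switches fields of /proc/<pid>/status (PID 1 is used purely as an example):

\begin{verbatim}
# Read voluntary vs. involuntary context-switch counts for one process.
def ctxt_switches(pid: int) -> dict:
    counts = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith(("voluntary_ctxt_switches",
                                "nonvoluntary_ctxt_switches")):
                key, value = line.split(":")
                counts[key.strip()] = int(value.strip())
    return counts

print(ctxt_switches(1))   # PID 1 is just an example
\end{verbatim}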
There are typically multiple levels of CPU cache, increasing in both size and latency. The caches end with the last-level cache (LLC), which is large (Mbytes) and slower. On a processor with three levels of cache, the LLC is also the Level 3 cache. A process consists of instructions to be interpreted and run by the CPU. These instructions and their data are typically loaded from RAM and cached in the CPU caches for faster access. The CPU first checks the lowest cache, i.e., the L1 cache. If the CPU finds the data there, it is called a hit. If not, it looks for the data in L2 and then L3. If the CPU does not find the data in any of the caches, it must access it from main memory (RAM); this is known as a cache miss. In general, a cache miss means higher latency, i.e., more time needed to access the data from memory.
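The cache hit ratio follows directly from reference and miss counts; in the toy sketch below, the counter values are made up for illustration (on real hardware they would come from performance counters such as perf's cache-references and cache-misses events):

\begin{verbatim}
# Hit ratio = (references - misses) / references.
def llc_hit_ratio(references: int, misses: int) -> float:
    if references == 0:
        return 0.0
    return (references - misses) / references

# Made-up counter values for illustration: 1,000,000 LLC references,
# 150,000 of which missed and had to go to main memory.
print(f"LLC hit ratio: {llc_hit_ratio(1_000_000, 150_000):.1%}")   # 85.0%
\end{verbatim}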
The kernel and processor are responsible for mapping virtual memory to physical memory. For efficiency, memory mappings are created in groups of memory called pages. When an application starts, it requests memory allocations. If there is not enough free memory on the heap, the brk() syscall is issued to extend the heap; alternatively, typically for larger allocations, a new memory segment is created via the mmap() syscall. Initially, this virtual memory mapping does not have a corresponding physical memory allocation. Therefore, when the application first accesses the allocated memory segment, the MMU raises a page fault. The kernel then handles the page fault, mapping the virtual memory to physical memory. The amount of physical memory allocated to a process is called its resident set size (RSS). When there is too much memory demand on the system, the kernel page-out daemon (kswapd) may look for memory pages to free. Three types of pages can be released, in this order: pages that were read but not modified (backed by disk), which can be freed immediately; pages that have been modified (dirty), which must be written to disk before they can be freed; and pages of application memory (anonymous), which must be stored on a swap device before they can be released. kswapd runs periodically, scanning the inactive and active page lists for memory to free. It is woken up when free memory crosses a low threshold and goes back to sleep when free memory crosses a high threshold. Swapping usually causes applications to run much more slowly.
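The gap between a virtual mapping and resident memory can be seen with a short sketch: an anonymous mmap() only adds virtual address space, and RSS (the VmRSS field of /proc/self/status) grows once the pages are first touched and page faults are taken. The 64 MB size is arbitrary:

\begin{verbatim}
import mmap

def rss_kb() -> int:
    # Resident set size (VmRSS) of the current process, in kB.
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS"):
                return int(line.split()[1])
    return 0

before = rss_kb()
buf = mmap.mmap(-1, 64 * 1024 * 1024)    # anonymous mapping: virtual only
after_map = rss_kb()                     # RSS barely changes yet
for off in range(0, len(buf), 4096):     # first touch of every page...
    buf[off] = 1                         # ...takes a page fault per page
after_touch = rss_kb()                   # RSS has grown by roughly 64 MB
print(before, after_map, after_touch)
\end{verbatim}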
Applications usually interact with the file system directly, and file systems can use caching, read-ahead, buffering, and asynchronous I/O to avoid exposing disk I/O latency to the application. Logical I/O describes requests to the file system. If these requests must be served from the storage devices, they become physical I/O. Not all logical I/O becomes physical I/O; many logical read requests may be returned from the file system cache. File systems are accessed via a virtual file system (VFS), which provides operations for reading, writing, opening, closing, etc., which file systems map to their internal functions. Linux uses multiple caches to improve the performance of storage I/O via the file system: the page cache, which contains virtual memory pages and improves the performance of file and directory I/O; the inode cache, which holds the data structures file systems use to describe their stored objects; and the directory cache, which caches mappings from directory entry names to VFS inodes, improving the performance of pathname lookups. The page cache grows to be the largest of these because it caches the contents of files, including “dirty” pages that have been modified but not yet written to disk.
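A cheap, high-level view of the page cache is available from /proc/meminfo; the sketch below reads the Cached and Dirty fields (approximate page cache size and modified pages awaiting writeback, in kB):

\begin{verbatim}
# Report selected /proc/meminfo fields (values are in kB).
def meminfo(*keys):
    wanted = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":")
            if key in keys:
                wanted[key] = int(value.split()[0])
    return wanted

# Cached approximates the page cache size; Dirty is modified pages
# that have not yet been written back to disk.
print(meminfo("Cached", "Dirty"))
\end{verbatim}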
Linux exposes rotational magnetic media, flash-based storage, and network storage as storage devices; disk I/O refers to I/O operations on these devices. Disk I/O is a common source of performance issues because I/O latency on storage devices is orders of magnitude slower than the nanosecond or microsecond speed of CPU and memory operations. Block I/O refers to device access in blocks. I/O is queued and scheduled in the block layer. Wait time is the time spent in the operating system's block layer scheduler queues and device dispatcher queues. Service time is the time from device issue to completion, which may include time spent waiting in an on-device queue. Request time is the overall time from when an I/O was inserted into the OS queues to its completion. The request time matters the most, as that is the time applications must wait when I/O is synchronous.
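Under these definitions, the request time seen by a synchronous application is simply the block layer queueing time plus the device service time; the numbers in the toy sketch below are made up for illustration:

\begin{verbatim}
# Request time = time queued in the OS block layer + device service time.
def request_time_ms(queue_wait_ms: float, service_ms: float) -> float:
    return queue_wait_ms + service_ms

# E.g. 0.4 ms queued plus 2.1 ms of device service time: a synchronous
# reader waits 2.5 ms for this I/O.
print(request_time_ms(0.4, 2.1))
\end{verbatim}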
Networking is a complex part of the Linux system. It involves many different layers and protocols, including the application, protocol libraries, syscalls, TCP or UDP, IP, and device drivers for the network interface. In general, the networking system can be broken down into four stages. NIC and device driver processing first reads packets from the NIC and puts them into kernel buffers. Besides the NIC and device driver, this stage includes DMA, dedicated memory regions in RAM for storing receive and transmit packets called rings, and the NAPI system for polling packets from these rings into kernel buffers. It also incorporates early packet processing hooks such as XDP and AF\_XDP, and can use custom drivers that bypass the kernel (i.e., the following two stages), such as DPDK. Next is socket processing. This stage also includes queuing and the different queuing disciplines, and incorporates packet processing hooks such as TC and Netfilter, which can alter the flow of packets through the networking stack. After that is the protocol processing stage, which applies the functions of the different IP and transport protocols; this processing runs in softirq context. Last is the application stage, where the application receives and sends packets on the destination socket.
A flame graph visualizes a collection of stack traces. Each stack frame is drawn as a color-coded, horizontal bar; the y-axis shows stack depth and the width of each bar is proportional to how often that code path appeared in the profile. Flame graphs can be built from on-CPU samples, off-CPU (blocked) time, or latency data, helping developers identify and fix bottlenecks in their applications.
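Flame graph tooling typically consumes “folded” stack samples, one line per sampled stack; the toy sketch below (with invented stacks) shows how sample counts translate into the relative widths of the frames:

\begin{verbatim}
from collections import Counter

# Invented folded stacks: one entry per sampled stack, frames joined by ";".
samples = [
    "main;handle_request;read_file",
    "main;handle_request;read_file",
    "main;handle_request;parse",
    "main;idle",
]

folded = Counter(samples)
total = sum(folded.values())
for stack, n in folded.most_common():
    # A flame graph draws each code path with a width proportional to n/total.
    print(f"{stack} {n} ({n / total:.0%} of samples)")
\end{verbatim}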
To analyse the performance of the CPU along with the performance of processes, kernel routines and interrupt routines, it is essential to look at some key insights from the performance data. Firstly, it is necessary to know which processes have run and their lifespan; this can also be used to identify issues with short-lived processes. Next, it is important to look at the time each process spends waiting for its turn on the CPU. Given this information, we can also find the time that processes spend off-CPU, whether they left the CPU voluntarily or involuntarily. Lastly, where this data reveals performance issues, e.g., a process consuming a lot of time on-CPU or spending a lot of time off-CPU, it is helpful to analyse the code paths consuming CPU resources and the code paths that result in the process being off-CPU.
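As a first approximation of on-CPU time per process, utime and stime can be read from /proc/<pid>/stat; sampling the value twice over an interval gives the CPU time consumed in that interval. A minimal sketch:

\begin{verbatim}
import os

CLK_TCK = os.sysconf("SC_CLK_TCK")   # clock ticks per second

def cpu_seconds(pid: int) -> float:
    # utime (field 14) and stime (field 15) of /proc/<pid>/stat, in ticks.
    with open(f"/proc/{pid}/stat") as f:
        rest = f.read().rsplit(")", 1)[1].split()   # fields after comm
    utime, stime = int(rest[11]), int(rest[12])
    return (utime + stime) / CLK_TCK

print(cpu_seconds(os.getpid()))
\end{verbatim}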
As mentioned earlier, another CPU resource besides time is the CPU cache, which can also be a performance factor for running processes. To diagnose this, it is essential to obtain the number of LLC cache misses as well as the LLC hit ratio. However, these two metrics cannot be collected on virtual machines, as is the case for a cloud-based NFVI. The prototyping system in this study collects and presents this data, both as summary graphs and as raw data for analysis.
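Where the hardware PMU is accessible (i.e., on bare metal rather than inside a VM), the LLC counters can be sampled with perf(1); the sketch below simply shells out to perf stat for a 5-second system-wide sample and prints its summary:

\begin{verbatim}
import subprocess

# System-wide LLC load and miss counters for 5 seconds. Requires perf and
# PMU access, which is typically unavailable inside virtual machines.
cmd = ["perf", "stat", "-a", "-e", "LLC-loads,LLC-load-misses",
       "--", "sleep", "5"]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stderr)   # perf stat prints its counter summary to stderr
\end{verbatim}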
Memory operations can be frequent; therefore, to reduce overheads, it is important to look at some of the less frequent events that can give insights into the performance of the memory resource. These relatively infrequent activities are: brk() and mmap() calls, page faults, and page-outs. An important insight is the number of memory requests that result in a new segment on the heap, i.e., requests for new mappings. It is also beneficial to know the code path responsible for heap extension, which reveals the part of the application that caused the heap to grow. Another important event is the page fault, which adds latency and grows a process's RSS; likewise, it is important to know the code paths responsible for page faults. As the system reclaims memory, we also want to know the processes affected and the latency, i.e., the time taken for the reclaim.
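Per-process fault counts are cheap to obtain; for example, the minor and major page fault counts of the current process are available through getrusage(), as the sketch below shows:

\begin{verbatim}
import resource

usage = resource.getrusage(resource.RUSAGE_SELF)
# ru_minflt: page faults serviced without disk I/O (minor);
# ru_majflt: page faults that required reading a page from disk (major).
print("minor faults:", usage.ru_minflt, "major faults:", usage.ru_majflt)
\end{verbatim}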
Firstly, we want to characterise virtual file system operations. This helps us know what a process is spending the most time doing on the filesystem: reads and writes (I/O), creates, opens, and syncs. After that, it is essential to know the size of data read and written, grouped by process name; this can help diagnose which process is responsible for degraded filesystem performance. In the same manner, where VFS open operations are frequent, it is essential to know which processes are opening files. While the earlier results help in understanding the processes, it is also necessary to know the filenames that are most frequently read and written. At a high level, this can expose configuration errors, for example, verbose logging in production, which was the case for the Bind9 VNF in our study. Since sockets also appear as filenames, this can also show the frequency of socket reads and writes. As mentioned earlier, the filesystem uses caches to avoid exposing disk I/O latency; therefore, another critical performance factor to consider is how the cache is performing. Applications are affected mainly by the page cache; examining the page cache hit ratio over time can give insights into the NFV configuration tuning needed.
This spans everything: block device I/O (disk I/O), file system CPU cycles, file system locks, run queue latency, etc. It gives a measure of the latency suffered by an application reading from the filesystem and is a good starting point for understanding whether the application's performance is affected by, or dependent on, the file system. In cases where the application's performance is impacted by filesystem access, the results below, along with the results from the disk I/O analysis, should be examined to better observe the points of performance degradation. This view shows common file system operations: reads, writes, opens, and fsyncs.
This instruments at the VFS interface, so it captures reads and writes that may return entirely from the file system cache (page cache). Type refers to the type of file: R for regular files, S for sockets, and O for other (including pipes). Because the results come from VFS operations, files read or written by other means (e.g., via mmap()) do not appear in these results.
This shows the overall system cache performance for the interval and duration of the experiment. These high-level results are helpful in identifying whether further investigation is needed: where the number of misses is high, the results below should be examined further.
This shows the processes responsible for the most hits and misses, enabling developers to identify the processes that need further optimisation. The results also show the percentage of read or write hits, which helps identify whether a process is doing more writes than reads. This can expose application configuration not meant for production, e.g., a process with verbose logging. It can also help the system administrator configure the system, or pick a suitable system for the workload.
To evaluate the performance of disk I/O, we first want to know the overall block I/O device latency. This refers to the time from issuing a request to the device to when it completes, including time spent queued in the kernel, for each unique set of request flags, e.g., Read, Write, Read-Ahead, etc. While latency can show the overall performance of disk I/O, remediation requires more detail, such as: (a) the random/sequential disk access patterns, which show which disks are mostly used, the access pattern, and the data sizes; (b) the processes running and their disk I/O requests, which helps further identify the processes that need to be optimised; (c) the processes performing I/O on disk and their I/O sizes; and (d) the time requests were queued in the I/O scheduler in the block layer.
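A coarse, always-available baseline for per-device activity is /proc/diskstats; the sketch below extracts completed read and write counts per device (sda is used purely as an example device name):

\begin{verbatim}
# Completed reads (field 4) and writes (field 8) per device,
# from /proc/diskstats: major minor name reads ... writes ...
def diskstats():
    stats = {}
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            stats[fields[2]] = {"reads": int(fields[3]),
                                "writes": int(fields[7])}
    return stats

print(diskstats().get("sda"))   # "sda" is just an example device
\end{verbatim}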
The biolatency tool traces block device I/O (disk I/O) and shows the distribution of I/O latency, including the time requests spent queued in the I/O scheduler in the block layer.
This shows the top 20 processes performing disk I/O operations per given interval (specified earlier). It shows the type of operation (read/write) and the average time taken in milliseconds (units are not shown on the graph). The average time is the total time of the operations divided by the number of I/O. Hovering over the chart shows the Kbytes, the number of I/O operations and the disk name. This information is used to deduce the process(es) responsible for, or incurring, the latency shown in the block I/O device latency graph, thereby enabling further optimisation.
This shows the size distribution of the block I/O. It helps in identifying the general distribution of block I/O sizes and the respective distribution per process, which helps identify the process(es) issuing larger or smaller I/O for optimisation.
To draw insights on the performance of networking on the NFV and for the VNF, the first step is to know the number of packets being received and their sizes. After that, we want to see the latency of the device queue, i.e., the time from when packets are pushed into the device layer for sending until they are sent out, as signalled by NAPI. Next, we want to know the time spent in the queuing disciplines, followed by the latency of IP protocol connections and the processes making the connections. Following this, we would also like to know the lifespan of the kernel socket buffers used to pass packets across the networking stack; this can show the latency within the networking stack. While this shows the latency, it will not show packet drops. For that, it is essential to know the number of packets, the allocated size of the socket buffers, and their limits: packets are dropped when the socket buffer limits have been reached.
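For the first step, packet and byte counters per interface are available from /proc/net/dev without any tracing; the sketch below parses them (eth0 is used purely as an example interface name):

\begin{verbatim}
# Per-interface receive/transmit byte and packet counters from /proc/net/dev.
def netdev():
    stats = {}
    with open("/proc/net/dev") as f:
        for line in f.readlines()[2:]:      # skip the two header lines
            iface, data = line.split(":")
            fields = data.split()
            stats[iface.strip()] = {
                "rx_bytes": int(fields[0]), "rx_packets": int(fields[1]),
                "tx_bytes": int(fields[8]), "tx_packets": int(fields[9]),
            }
    return stats

print(netdev().get("eth0"))   # "eth0" is just an example interface
\end{verbatim}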